
    Toward bio-inspired information processing with networks of nano-scale switching elements

    Unconventional computing explores multi-scale platforms connecting molecular-scale devices into networks for the development of scalable neuromorphic architectures, often based on new materials and components with new functionalities. We review some work investigating the functionalities of locally connected networks of different types of switching elements as computational substrates. In particular, we discuss reservoir computing with networks of nonlinear nanoscale components. In the usual neuromorphic paradigms, the network synaptic weights are adjusted as a result of a training/learning process. In reservoir computing, the nonlinear network acts as a dynamical system that mixes and spreads the input signals over a large state space, and only a readout layer is trained. We illustrate the most important concepts with a few examples, featuring memristor networks with time-dependent and history-dependent resistances.
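
    A minimal reservoir-computing sketch may make the training asymmetry concrete: the recurrent network below is fixed and random (standing in for a physical network of switching elements), and only the linear readout is fitted. All names and parameter values are illustrative assumptions, not taken from the reviewed work.

```python
# Minimal echo-state-style sketch (illustrative assumptions, not the reviewed work):
# a fixed random recurrent network mixes the input; only the readout is trained.
import numpy as np

rng = np.random.default_rng(0)
N = 100                                            # number of reservoir nodes
W_in = rng.uniform(-0.5, 0.5, N)                   # fixed input weights
W = rng.uniform(-0.5, 0.5, (N, N))                 # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # scale spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with the input sequence u and collect its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)            # nonlinear mixing of input and state
        states.append(x.copy())
    return np.array(states)

# Toy task: recover a delayed copy of the input from the reservoir states.
u = rng.uniform(-1, 1, 500)
X = run_reservoir(u)
y = np.roll(u, 3)                                  # target: input delayed by 3 steps
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)      # train only the linear readout
```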

    Diffusion Controlled Reactions, Fluctuation Dominated Kinetics, and Living Cell Biochemistry

    In recent years a considerable portion of the computer science community has focused its attention on understanding living cell biochemistry, and the efforts to understand such a complicated reaction environment have spread over a wide front, ranging from systems biology approaches, through network analysis (motif identification), to developing languages and simulators for low-level biochemical processes. Apart from simulation work, much of the effort is directed at using mean field equations (equivalent to the equations of classical chemical kinetics) to address various problems (stability, robustness, sensitivity analysis, etc.). Rarely is the use of mean field equations questioned. This review provides a brief overview of the situations in which mean field equations fail and should not be used. These equations can be derived from the theory of diffusion controlled reactions, and they emerge when the assumption of perfect mixing is used.
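
    A small worked example of where the two descriptions can part ways: for the annihilation reaction A + A -> 0 at low copy numbers, an exact stochastic (Gillespie) simulation can deviate from the mean field (classical kinetics) prediction. The reaction, rate constant and parameter values below are illustrative assumptions, not drawn from the review.

```python
# Illustrative comparison (assumed reaction and parameters): exact stochastic
# simulation of A + A -> 0 versus the mean-field (classical kinetics) prediction.
import numpy as np

def gillespie_final(n0, k, T, rng):
    """Number of A molecules left at time T, simulated with the Gillespie algorithm."""
    t, n = 0.0, n0
    while n >= 2:
        a = k * n * (n - 1) / 2                    # propensity of the next A + A -> 0 event
        t += rng.exponential(1.0 / a)
        if t > T:
            break
        n -= 2
    return n

rng = np.random.default_rng(1)
k, n0, T = 0.1, 10, 5.0                            # small copy number on purpose
stochastic_mean = np.mean([gillespie_final(n0, k, T, rng) for _ in range(2000)])

# Mean-field equation dn/dt = -k n^2 gives n(T) = n0 / (1 + k n0 T).
mean_field = n0 / (1 + k * n0 * T)
print(stochastic_mean, mean_field)                 # the two estimates need not agree
```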

    On Improving the Computing Capacity of Dynamical Systems

    Reservoir computing has emerged as a practical approach for solving temporal pattern recognition problems. The procedure of preparing the system for pattern recognition is simple, provided that the dynamical system (reservoir) used for computation is complex enough. However, to achieve a sufficient reservoir complexity, one has to use many interacting elements. We propose a novel method to reduce the number of reservoir elements without reducing the computing capacity of the device. It is shown that if an auxiliary input channel, the drive, can be engineered, advantageous correlations between the signal one wishes to analyse and the state of the reservoir can emerge, increasing the intelligence of the system. The method is illustrated on the problem of electrocardiogram (ECG) signal classification. By using a reservoir with only one element and an optimised drive, more than 93% of the signals have been correctly labelled.
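
    The following toy sketch illustrates the general idea of a single-element reservoir with an auxiliary drive; the node dynamics, the drive waveform and the synthetic two-class signals are assumptions made for illustration and are not the authors' ECG setup.

```python
# Toy sketch (assumed dynamics and signals, not the authors' ECG pipeline):
# one leaky nonlinear node receives the signal plus an auxiliary drive,
# and a linear readout is fitted on simple features of its state trajectory.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)
drive = np.sin(3 * t)                              # auxiliary drive (would be optimised)

def single_node_reservoir(signal, drive, leak=0.7, b=0.5, c=0.3):
    """Single-element reservoir: x <- (1-leak)*x + leak*tanh(b*signal + c*drive + x)."""
    x, states = 0.0, []
    for s, d in zip(signal, drive):
        x = (1 - leak) * x + leak * np.tanh(b * s + c * d + x)
        states.append(x)
    return np.array(states)

def make_signal(label):
    """Two synthetic signal classes standing in for the two kinds of beats."""
    base = np.sin(t) if label == 0 else np.sin(t) + 0.5 * np.sin(5 * t)
    return base + 0.1 * rng.standard_normal(t.size)

X, y = [], []
for label in (0, 1):
    for _ in range(50):
        states = single_node_reservoir(make_signal(label), drive)
        X.append([states.mean(), states.std()])    # crude readout features
        y.append(label)

X, y = np.array(X), np.array(y)
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)   # linear readout
```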

    On developing theory of reservoir computing for sensing applications: the state weaving environment echo tracker (SWEET) algorithm

    As a paradigm of computation, reservoir computing has gained enormous momentum. In principle, any sufficiently complex dynamical system equipped with a readout layer can be used for any computation, and this can be achieved by adjusting only the readout layer. Owing to this inherent flexibility of implementation, new applications of reservoir computing are being reported at a constant rate. However, relatively few studies focus on sensing, and in the ones that do, the reservoir is often exploited in a somewhat passive manner: the reservoir is used to post-process the signal from sensing elements that are placed separately, and it could be replaced by another information processing system without loss of sensor functionality ('reservoir computing and sensing'). An entirely different class of sensing approaches is suggested here, referred to as 'reservoir computing for sensing', in which the reservoir plays a central role. In the State Weaving Environment Echo Tracker (SWEET) sensing approach, the reservoir functions as the sensing element when the dynamical states of the reservoir and of the environment one wishes to analyze are strongly interwoven. Some distinct characteristics of reservoir computing (in particular the separability and the echo state properties) are carefully exploited to achieve sensing functionality. The SWEET approach is formulated both as a generic device setup and as an abstract mathematical algorithm. This algorithmic template could be used to develop a theory (or a class of theories) of 'reservoir computing for sensing', which could provide guidelines for engineering novel sensing applications. It could also provide ideas for a creative recycling of existing sensing solutions. For example, the Horizon 2020 project RECORD-IT (Reservoir Computing with Real-time Data for future IT) exploits the SWEET sensing algorithm for ion detection. Accordingly, the terms SWEET sensing algorithm and RECORD-IT sensing algorithm can be used interchangeably.
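
    A schematic sketch of the SWEET idea under strong simplifying assumptions: the environment is reduced to a single parameter that enters the reservoir dynamics, so different environments leave different traces in the reservoir trajectory that a readout could separate. Everything below (dynamics, sizes, parameter values) is hypothetical and only meant to convey the structure of the setup.

```python
# Schematic SWEET-style sketch under strong simplifying assumptions: the
# environment is a single parameter woven into the reservoir dynamics, so
# different environments leave different traces in the reservoir trajectory.
import numpy as np

rng = np.random.default_rng(0)
N = 20
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # keep the reservoir contractive
w_drive = rng.uniform(-1, 1, N)

def superreservoir_feature(drive, env):
    """Time-averaged reservoir state when the dynamics are modulated by `env`."""
    x, states = np.zeros(N), []
    for d in drive:
        x = np.tanh(W @ x + w_drive * d + env)     # environment enters the dynamics
        states.append(x.copy())
    return np.mean(states, axis=0)

drive = np.sin(np.linspace(0, 20, 200))
features = {env: superreservoir_feature(drive, env) for env in (0.0, 0.3)}
# If the separability and echo state properties hold, the two environments map to
# distinct regions of state space, and a trained readout can tell them apart.
```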

    On using reservoir computing for sensing applications: exploring environment-sensitive memristor networks

    Recently, the SWEET sensing setup has been proposed as a way of exploiting reservoir computing for sensing. The setup features three components: an input signal (the drive), the environment, and a reservoir, where the reservoir and the environment are treated as one dynamical system, a superreservoir. Due to the reservoir-environment interaction, information about the environment is encoded in the state of the reservoir. This information can be inferred (decoded) by analysing the reservoir state. The decoding is done by using an external drive signal, which is optimised to achieve a separation in the space of reservoir states: under different environmental conditions, the reservoir should visit distinct regions of the configuration space. We examined this approach theoretically by using an environment-sensitive memristor as a reservoir, with the memristance as the state variable. The goal was to identify a suitable drive that can achieve the phase-space separation; this was formulated as an optimisation problem and solved by a genetic optimisation algorithm developed in this study. For simplicity, only two environmental conditions were considered (describing a static and a varying environment). A suitable drive signal was identified both from an intuitive analysis of the memristor dynamics and by solving the optimisation problem. Under both drives, the memristance is driven to two different regions of the one-dimensional state space under the influence of the two environmental conditions, which can be used to infer the state of the environment. The separation occurs if there is a synchronisation between the drive and the environmental signals. To quantify the magnitude of the separation, we introduced a quality-of-sensing index: the ability to sense depends critically on the synchronisation between the drive and the environmental conditions. If this synchronisation is not maintained, the quality of sensing deteriorates.
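
    The sketch below illustrates the overall machinery under an assumed state equation (the paper's memristor model, drive optimisation and quality-of-sensing index are not reproduced): a one-dimensional memristance is evolved under the sum of a drive and an environmental signal, and the distance between the final states obtained under the two environmental conditions stands in for the separation one would want to maximise.

```python
# Illustrative sketch with an assumed state equation (not the paper's memristor
# model or optimisation procedure): the memristance evolves under drive plus
# environment, and the distance between final states under the two environmental
# conditions stands in for the separation to be maximised.
import numpy as np

def evolve_memristance(drive, env_signal, w0=0.5, mu=0.1, dt=0.01):
    """Toy state equation dw/dt = mu * v * w * (1 - w), with v = drive + environment."""
    w = w0
    for v_d, v_e in zip(drive, env_signal):
        w += dt * mu * (v_d + v_e) * w * (1 - w)
        w = min(max(w, 0.0), 1.0)                  # keep the state variable in [0, 1]
    return w

t = np.arange(0.0, 50.0, 0.01)
drive = np.sin(t)                                  # candidate drive (would be optimised)
env_static = np.zeros_like(t)                      # static environmental condition
env_varying = 0.5 * np.sin(t)                      # varying condition, in phase with the drive

separation = abs(evolve_memristance(drive, env_varying) -
                 evolve_memristance(drive, env_static))
print(separation)                                  # larger separation -> easier inference
```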

    Mathematical explanation of the predictive power of the X-level approach reaction noise estimator method

    The X-level Approach Reaction Noise Estimator (XARNES) method has been developed previously to study reaction noise in well mixed reaction volumes. The method is a typical moment closure method: it works by closing the infinite hierarchy of equations that describe the moments of the particle number distribution function. This is done by using correlation forms, which describe correlation effects in a strict mathematical way. The variable X specifies which correlation effects (forms) are included in the description. Previously, it was argued, in a rather informal way, that the method should work well in situations where the particle number distribution function is Poisson-like, and numerical tests confirmed this. It was shown that the predictive power of the method increases, i.e. the agreement between theory and simulations improves, as X is increased. Here, these features of the method are explained using rigorous mathematical reasoning. Three derivative matching theorems are proven which show that the observed numerical behaviour is generic to the method.
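
    As a generic illustration of the moment-closure step (standard notation, not the XARNES correlation forms), consider the reaction A + A -> 0 with rate constant k: each moment equation couples to a higher moment, and a closure ansatz truncates the hierarchy.

```latex
% Generic moment-closure illustration for A + A -> 0 with rate constant k
% (standard notation, not the XARNES correlation forms):
\begin{align}
  \frac{d\langle n\rangle}{dt}     &= -k\left(\langle n^{2}\rangle - \langle n\rangle\right),\\
  \frac{d\langle n^{2}\rangle}{dt} &= -2k\left(\langle n^{3}\rangle - \langle n^{2}\rangle\right)
                                      + 2k\left(\langle n^{2}\rangle - \langle n\rangle\right).
\end{align}
% Each moment couples to a higher one; a closure ansatz such as a Poisson-like
% assumption \langle n^{3}\rangle \approx f(\langle n\rangle, \langle n^{2}\rangle)
% truncates the hierarchy, and raising the closure order (the "X") tightens the
% agreement with stochastic simulations.
```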

    Safe uses of Hill's model: an exact comparison with the Adair-Klotz model

    Background: The Hill function and the related Hill model are used frequently to study processes in the living cell. There are very few studies investigating the situations in which the model can be safely used. For example, it has been shown, at the mean field level, that the dose response curve obtained from a Hill model agrees well with the dose response curves obtained from a more complicated Adair-Klotz model, provided that the parameters of the Adair-Klotz model describe strongly cooperative binding. However, it has not been established whether such findings can be extended to other properties and to non-mean field (stochastic) versions of the same, or other, models.
    Results: In this work a rather generic quantitative framework for approaching such a problem is suggested. The main idea is to focus on comparing the particle number distribution functions for Hill's and Adair-Klotz's models instead of investigating a particular property (e.g. the dose response curve). The approach is valid for any model that can be mathematically related to the Hill model. The Adair-Klotz model is used to illustrate the technique. One main and two auxiliary similarity measures were introduced to compare the distributions in a quantitative way. Both the time dependent and the equilibrium properties of the similarity measures were studied.
    Conclusions: A strongly cooperative Adair-Klotz model can be replaced by a suitable Hill model in such a way that any property computed from the two models, even one describing stochastic features, is approximately the same. The quantitative analysis showed that the boundaries of the regions in the parameter space where the models behave in the same way exhibit a rather rich structure.
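
    For reference, the mean-field dose-response curves being compared can be written, in generic notation (the paper's own parametrisation may differ), as follows.

```latex
% Hill model with coefficient n and constant K (generic notation):
\begin{equation}
  \bar{Y}_{\mathrm{Hill}}(x) = \frac{x^{n}}{K^{n} + x^{n}},
\end{equation}
% Adair-Klotz model with two binding sites and stepwise constants K_1, K_2:
\begin{equation}
  \bar{Y}_{\mathrm{AK}}(x) = \frac{K_{1}x + 2K_{1}K_{2}x^{2}}{2\left(1 + K_{1}x + K_{1}K_{2}x^{2}\right)}.
\end{equation}
% Strong cooperativity (K_2 much larger than K_1) makes the Adair-Klotz curve
% approach a Hill curve with n close to 2; the paper extends the comparison from
% this dose response to the full particle-number distributions.
```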